Results 1 - 20 of 42
1.
Nanotechnology ; 35(22), 2024 Mar 14.
Article in English | MEDLINE | ID: mdl-38387099

ABSTRACT

Two-dimensional (2D) materials are increasingly used in biomedical and cosmetic products, yet their safe use in the human body and the environment requires a comprehensive understanding of their nanotoxicity. In this work, we investigate the effect of pristine graphene and graphene oxide (GO) on the adsorption and conformational changes of skin keratin using molecular dynamics simulations. We find that skin keratin can be adsorbed through various noncovalent driving forces, such as van der Waals (vdW) interactions and electrostatics. In the case of GO, the oxygen-containing groups prevent tighter contact between skin keratin and the graphene basal plane through steric effects and electrostatic repulsion. On the other hand, electrostatic attraction and hydrogen bonding enhance their binding affinity to positively charged residues such as lysine and arginine. The secondary structure of skin keratin is better preserved in the GO system, suggesting that GO has good biocompatibility. The charged groups on the GO surface act as hydrogen-bond acceptors, similar to the natural receptors of keratin in this physiological environment. This work contributes to a better understanding of the nanotoxicity of cutting-edge 2D materials to human health, thereby advancing their potential biological applications.


Subjects
Graphite, Nanostructures, Humans, Graphite/chemistry, Keratins, Molecular Dynamics Simulation, Nanostructures/toxicity, Nanostructures/chemistry
2.
IEEE J Biomed Health Inform ; 28(3): 1516-1527, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38206781

ABSTRACT

Breast lesion segmentation in ultrasound images is essential for computer-aided breast-cancer diagnosis. To improve segmentation performance, most approaches design sophisticated deep-learning models by mining the patterns of foreground lesions and normal backgrounds simultaneously or by unilaterally enhancing foreground lesions via various focal losses. However, the potential of normal backgrounds, which could reduce false positives by compacting the feature representation of all normal background regions, remains underutilized. From the novel viewpoint of bilateral enhancement, we propose a negative-positive cross-attention network that concentrates on normal backgrounds and foreground lesions, respectively. Derived from the complementary opposites of bipolarity in TaiChi, the network is denoted TaiChiNet; it consists of a negative normal-background path and a positive foreground-lesion path. To transmit information across the two paths, a cross-attention module, a complementary MLP-head, and a complementary loss are built for deep-layer features, shallow-layer features, and mutual-learning supervision, respectively. To the best of our knowledge, this is the first work to formulate breast lesion segmentation as a mutual supervision task from the foreground-lesion and normal-background views. Experimental results demonstrate the effectiveness of TaiChiNet on two breast lesion segmentation datasets with a lightweight architecture. Furthermore, extensive experiments on thyroid nodule segmentation and retinal optic cup/disc segmentation datasets indicate the application potential of TaiChiNet.


Subjects
Breast Neoplasms, Optic Disk, Humans, Female, Ultrasonography, Breast Neoplasms/diagnostic imaging, Computer-Assisted Diagnosis, Knowledge, Computer-Assisted Image Processing
3.
Article in English | MEDLINE | ID: mdl-37729565

ABSTRACT

This work makes the first research effort to address unsupervised 3-D action representation learning with point cloud sequences, which differs from existing unsupervised methods that rely on 3-D skeleton information. Our approach is built on the state-of-the-art 3-D action descriptor, the 3-D dynamic voxel (3DV), with contrastive learning (CL). The 3DV compresses a point cloud sequence into a compact point cloud of 3-D motion information. Spatiotemporal data augmentations are conducted on it to drive CL. However, we find that existing CL methods (e.g., SimCLR or MoCo v2) often suffer from high pattern variance across the augmented 3DV samples from the same action instance; that is, the augmented 3DV samples remain highly complementary in feature space after CL, while the complementary discriminative clues within them have not been well exploited. To address this, a feature-augmentation-adapted CL (FACL) approach is proposed, which facilitates 3-D action representation by jointly attending to the features from all augmented 3DV samples, in the spirit of feature augmentation. FACL runs in a global-local way: one branch learns a global feature that involves the discriminative clues from the raw and augmented 3DV samples, and the other focuses on enhancing the discriminative power of the local feature learned from each augmented 3DV sample. The global and local features are fused via concatenation to characterize 3-D action jointly. To fit FACL, a series of spatiotemporal data augmentation approaches is also studied on 3DV. Extensive experiments verify the superiority of our unsupervised method for 3-D action feature learning: it outperforms the state-of-the-art skeleton-based counterparts by 6.4% and 3.6% under the cross-setup and cross-subject test settings on NTU RGB+D 120, respectively. The source code is available at https://github.com/tangent-T/FACL.
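As a point of reference for the contrastive objectives mentioned above (SimCLR, MoCo v2), the sketch below shows a minimal NT-Xent contrastive loss between two batches of augmented-sample embeddings. It is a generic PyTorch illustration, not the FACL objective itself; the batch size and feature dimension are arbitrary assumptions.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss between two batches of embeddings, where
    z1[i] and z2[i] come from two augmentations of the same instance."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)          # (2N, d)
    sim = z @ z.t() / temperature           # pairwise cosine similarities
    sim.fill_diagonal_(float('-inf'))       # exclude self-similarity
    n = z1.size(0)
    # positives sit at offset n: sample i pairs with sample i + n and vice versa
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# toy usage: 8 action instances, 128-d features from two augmented views
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent_loss(z1, z2).item())
```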

4.
IEEE Trans Pattern Anal Mach Intell ; 45(11): 13586-13598, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37428671

ABSTRACT

Time series analysis is essential to many far-reaching applications of data science and statistics, including economic and financial forecasting, surveillance, and automated business processing. Despite the great success of the Transformer in computer vision and natural language processing, its potential as a general backbone for analyzing ubiquitous time series data has not been fully realized. Prior Transformer variants for time series rely heavily on task-dependent designs and pre-assumed "pattern biases", revealing their insufficiency in representing the nuanced seasonal, cyclic, and outlier patterns that are highly prevalent in time series. As a consequence, they cannot generalize well to different time series analysis tasks. To tackle these challenges, we propose DifFormer, an effective and efficient Transformer architecture that can serve as a workhorse for a variety of time-series analysis tasks. DifFormer incorporates a novel multi-resolutional differencing mechanism, which progressively and adaptively makes nuanced yet meaningful changes prominent, while periodic or cyclic patterns are dynamically captured with flexible lagging and dynamic ranging operations. Extensive experiments demonstrate that DifFormer significantly outperforms state-of-the-art models on three essential time-series analysis tasks: classification, regression, and forecasting. In addition to its superior performance, DifFormer also excels in efficiency, with linear time/memory complexity and empirically lower time consumption.
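To make the differencing idea concrete, here is a minimal sketch of computing lagged differences of a time series at several temporal resolutions and stacking them. The actual DifFormer block is learnable and adaptive; the lag set, tensor shapes, and function name below are illustrative assumptions only.

```python
import torch

def multi_resolution_differencing(x, lags=(1, 2, 4, 8)):
    """Illustrative lagged differencing: for each lag L, compute x[t] - x[t-L]
    (zero-padded at the start) and stack the results along the channel axis.

    x: (batch, time, channels) time-series tensor.
    Returns: (batch, time, channels * len(lags)).
    """
    diffs = []
    for lag in lags:
        shifted = torch.zeros_like(x)
        shifted[:, lag:, :] = x[:, :-lag, :]
        diffs.append(x - shifted)
    return torch.cat(diffs, dim=-1)

series = torch.randn(4, 96, 7)          # e.g. 4 series, 96 steps, 7 variables
print(multi_resolution_differencing(series).shape)   # torch.Size([4, 96, 28])
```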

5.
Article in English | MEDLINE | ID: mdl-37022080

ABSTRACT

Medical image segmentation is a vital stage in medical image analysis. Numerous deep-learning methods have emerged to improve the performance of 2-D medical image segmentation, owing to the fast growth of convolutional neural networks. Generally, the manually defined ground truth is used directly to supervise models in the training phase. However, direct supervision by the ground truth often results in ambiguity and distractors when complex challenges appear simultaneously. To alleviate this issue, we propose a gradually recurrent network with curriculum learning, supervised by gradual information about the ground truth. The whole model is composed of two independent networks. One is the segmentation network, denoted GREnet, which formulates 2-D medical image segmentation as a temporal task supervised by pixel-level gradual curricula in the training phase. The other is a curriculum-mining network, which, to a certain degree, provides curricula of increasing difficulty in the ground truth of the training set by progressively uncovering hard-to-segment pixels in a data-driven manner. Given that segmentation is a pixel-level dense-prediction challenge, to the best of our knowledge, this is the first work to formulate 2-D medical image segmentation as a temporal task with pixel-level curriculum learning. In GREnet, the naive UNet is adopted as the backbone, while ConvLSTM is used to establish the temporal link between gradual curricula. In the curriculum-mining network, UNet++ supplemented by a transformer is designed to deliver curricula through the outputs of the modified UNet++ at different layers. Experimental results demonstrate the effectiveness of GREnet on seven datasets, i.e., three lesion segmentation datasets in dermoscopic images, an optic disc and cup segmentation dataset and a blood vessel segmentation dataset in retinal images, a breast lesion segmentation dataset in ultrasound images, and a lung segmentation dataset in computed tomography (CT).

6.
IEEE Trans Pattern Anal Mach Intell ; 45(8): 10443-10465, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37030852

ABSTRACT

Temporal sentence grounding in videos (TSGV), a.k.a. natural language video localization (NLVL) or video moment retrieval (VMR), aims to retrieve a temporal moment that semantically corresponds to a language query from an untrimmed video. Connecting computer vision and natural language, TSGV has drawn significant attention from researchers in both communities. This survey attempts to provide a summary of fundamental concepts in TSGV, the current research status, and future research directions. As background, we present a common structure of functional components in TSGV in a tutorial style: from feature extraction from the raw video and language query to answer prediction of the target moment. We then review the techniques for multimodal understanding and interaction, which are the key focus of TSGV for effective alignment between the two modalities. We construct a taxonomy of TSGV techniques and elaborate on the methods in different categories along with their strengths and weaknesses. Lastly, we discuss issues with current TSGV research and share our insights about promising research directions.


Subjects
Algorithms, Language
7.
Med Image Anal ; 83: 102664, 2023 01.
Article in English | MEDLINE | ID: mdl-36332357

ABSTRACT

Pneumonia can be difficult to diagnose because its symptoms are highly variable and its radiographic signs are often very similar to those of other illnesses such as a cold or influenza. Deep neural networks have shown promising performance in automated pneumonia diagnosis using chest X-ray radiography, allowing mass screening and early intervention to reduce severe cases and the death toll. However, they usually require many well-labelled chest X-ray images for training to achieve high diagnostic accuracy. To reduce the need for training data and annotation resources, we propose a novel method called Contrastive Domain Adaptation with Consistency Match (CDACM). It transfers knowledge from different but relevant datasets to an unlabelled, small-scale target dataset and improves the semantic quality of the learnt representations. Specifically, we design a conditional domain adversarial network that exploits the discriminative information conveyed in the predictions to mitigate the domain gap between the source and target datasets. Furthermore, owing to the small scale of the target dataset, we construct a feature cloud for each target sample and leverage contrastive learning to extract more discriminative features. Lastly, we propose adaptive feature cloud expansion to push the decision boundary towards a low-density area. Unlike most existing transfer learning methods, which aim only to mitigate the domain gap, our method simultaneously considers the domain gap and the data deficiency of the target dataset. The conditional domain adaptation and the feature cloud generation are learned jointly to extract discriminative features in an end-to-end manner. Moreover, the adaptive feature cloud expansion improves the model's generalisation ability in the target domain. Extensive experiments on pneumonia and COVID-19 diagnosis tasks demonstrate that our method outperforms several state-of-the-art unsupervised domain adaptation approaches, verifying the effectiveness of CDACM for automated pneumonia diagnosis using chest X-ray imaging.


Subjects
COVID-19 Testing, COVID-19, Humans
8.
IEEE Trans Pattern Anal Mach Intell ; 45(2): 2551-2566, 2023 Feb.
Article in English | MEDLINE | ID: mdl-35503823

ABSTRACT

Existing multi-view classification algorithms focus on promoting accuracy by exploiting different views, typically integrating them into common representations for follow-up tasks. Although this is effective, it is also crucial to ensure the reliability of both the multi-view integration and the final decision, especially for noisy, corrupted, and out-of-distribution data. Dynamically assessing the trustworthiness of each view for different samples could provide reliable integration, and this can be achieved through uncertainty estimation. With this in mind, we propose a novel multi-view classification algorithm, termed trusted multi-view classification (TMC), which provides a new paradigm for multi-view learning by dynamically integrating different views at the evidence level. The proposed TMC promotes classification reliability by considering the evidence from each view. Specifically, we introduce the variational Dirichlet distribution to characterize the distribution of the class probabilities, parameterized with evidence from the different views and integrated with the Dempster-Shafer theory. The unified learning framework induces accurate uncertainty and accordingly endows the model with both reliability and robustness against possible noise or corruption. Both theoretical and experimental results validate the effectiveness of the proposed model in terms of accuracy, robustness, and trustworthiness.
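For intuition, the sketch below combines class evidence from two views using Dirichlet-based subjective-logic opinions and a reduced Dempster-style combination rule, in the spirit of the evidence-level integration described above. The exact parameterization in the paper may differ, and the evidence values are toy numbers.

```python
import numpy as np

def opinion_from_evidence(e):
    """Map non-negative class evidence e (length K) to a subjective-logic
    opinion: per-class belief masses b and an overall uncertainty mass u."""
    e = np.asarray(e, dtype=float)
    K = e.size
    S = e.sum() + K            # Dirichlet strength with alpha = e + 1
    return e / S, K / S        # beliefs b, uncertainty u

def combine_opinions(b1, u1, b2, u2):
    """Reduced Dempster combination of two opinions (beliefs + uncertainty)."""
    conflict = np.sum(np.outer(b1, b2)) - np.sum(b1 * b2)   # mass on disagreeing classes
    scale = 1.0 - conflict
    b = (b1 * b2 + b1 * u2 + b2 * u1) / scale
    u = (u1 * u2) / scale
    return b, u

# toy usage: two views producing evidence over 3 classes
b1, u1 = opinion_from_evidence([9.0, 1.0, 0.5])   # confident view
b2, u2 = opinion_from_evidence([1.0, 1.2, 0.8])   # uncertain view
b, u = combine_opinions(b1, u1, b2, u2)
print(b, u)   # fused beliefs remain dominated by the confident view
```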

9.
Article in English | MEDLINE | ID: mdl-35969543

ABSTRACT

Spiking neural networks (SNNs) have advantages in latency and energy efficiency over traditional artificial neural networks (ANNs) due to their event-driven computation mechanism and their replacement of energy-consuming weight multiplication with addition. However, achieving high accuracy usually requires long spike trains, often more than 1000 time steps. This offsets the computational efficiency of SNNs because a longer spike train means more operations and higher latency. In this article, we propose a radix-encoded SNN with ultrashort spike trains: it can use fewer than six time steps to achieve even higher accuracy than its traditional counterpart. We also develop a method to fit our radix encoding technique into the ANN-to-SNN conversion approach so that radix-encoded SNNs can be trained more efficiently on mature platforms and hardware. Experiments show that our radix encoding achieves a 25x improvement in latency and a 1.7% improvement in accuracy compared to the state-of-the-art method using the VGG-16 network on the CIFAR-10 dataset.
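As a rough illustration of what encoding an activation into a handful of time steps by radix could look like, the sketch below writes non-negative integer activations as base-r digits, one digit per time step. This is only a didactic stand-in, not the paper's encoder; the radix, step count, and values are assumptions.

```python
import numpy as np

def radix_encode(values, radix=4, steps=4):
    """Encode non-negative integer activations as `steps` base-`radix` digits,
    most significant digit first; each digit could be realised as the number
    of spikes emitted in one time step."""
    values = np.asarray(values, dtype=np.int64)
    digits = np.zeros((steps,) + values.shape, dtype=np.int64)
    for t in range(steps - 1, -1, -1):          # fill least significant digit last
        digits[t] = values % radix
        values = values // radix
    return digits

def radix_decode(digits, radix=4):
    """Recover the original integer from its radix digits."""
    value = np.zeros(digits.shape[1:], dtype=np.int64)
    for digit in digits:                        # most significant digit first
        value = value * radix + digit
    return value

acts = np.array([0, 7, 63, 255])
code = radix_encode(acts, radix=4, steps=4)
print(code)                 # 4 digits per activation, each in [0, 3]
print(radix_decode(code))   # [  0   7  63 255]
```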

10.
Article in English | MEDLINE | ID: mdl-35998171

ABSTRACT

Efficient neural network training is essential for in situ training of edge artificial intelligence (AI) and for carbon footprint reduction in general. Training neural networks on the edge is challenging because there is a large gap between the limited resources of edge devices and the resource requirements of current training methods. Existing training methods assume that the underlying computing infrastructure has sufficient memory and energy supplies; they keep two copies of the model parameters, which is usually beyond the capacity of on-chip memory in processors, and the data movement between off-chip and on-chip memory incurs large energy costs. We propose resource-constrained training (RCT) to realize resource-efficient training for edge devices and servers. RCT keeps only a quantized model throughout training, so the memory requirement for model parameters is reduced. It adjusts the per-layer bitwidth dynamically to save energy when a model can learn effectively with lower precision. We carry out experiments with representative models and tasks in image classification, natural language processing, and crowd counting. Experiments show that, on average, an 8-15-bit weight update is sufficient to achieve SOTA performance in these applications. RCT saves 63.5%-80% of the memory for model parameters and saves more energy for communications. Through the experiments, we observe that the common practice regarding the first/last layer in model compression does not apply to efficient training. Interestingly, the more challenging a dataset is, the lower the bitwidth required for efficient training.
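The core memory saving comes from keeping only quantized parameters. A minimal sketch of per-layer uniform fake quantization at a chosen bitwidth is shown below; the layer names, shapes, and bitwidths are illustrative, and RCT's dynamic bitwidth adjustment policy is not reproduced.

```python
import numpy as np

def quantize_weights(w, bits):
    """Simulated uniform symmetric quantization of a weight tensor to `bits` bits:
    scale to an integer grid, round, clip, then rescale back to floating point."""
    qmax = 2 ** (bits - 1) - 1                 # e.g. 127 for 8 bits
    scale = np.max(np.abs(w)) / qmax
    if scale == 0:
        scale = 1.0                            # all-zero tensor: avoid division by zero
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q * scale

# illustrative per-layer bitwidths (a dynamic policy would pick these during training)
rng = np.random.default_rng(0)
layers = {"conv1": (rng.normal(size=(64, 3, 3, 3)), 8),
          "fc":    (rng.normal(size=(10, 512)), 12)}
for name, (w, bits) in layers.items():
    err = np.abs(w - quantize_weights(w, bits)).mean()
    print(f"{name}: {bits}-bit, mean quantization error {err:.5f}")
```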

11.
Med Image Anal ; 81: 102535, 2022 10.
Article in English | MEDLINE | ID: mdl-35872361

ABSTRACT

Accurate skin lesion diagnosis requires great effort from experts to identify characteristics in clinical and dermoscopic images. Deep multimodal learning-based methods can reduce intra- and inter-reader variability and improve diagnostic accuracy compared with single-modality-based methods. This study develops a novel method, named adversarial multimodal fusion with attention mechanism (AMFAM), to perform multimodal skin lesion classification. Specifically, we adopt a discriminator that uses adversarial learning to force the feature extractor to learn the correlated information explicitly. Moreover, we design an attention-based reconstruction strategy to encourage the feature extractor to concentrate on learning the features of the lesion area, thus enhancing the feature vector of each modality with more discriminative information. Unlike existing multimodal approaches, which only focus on learning complementary features from dermoscopic and clinical images, our method considers both the correlated and the complementary information of the two modalities for multimodal fusion. To verify the effectiveness of our method, we conduct comprehensive experiments on a publicly available multimodal and multi-task skin lesion classification dataset: the 7-point criteria evaluation database. The experimental results demonstrate that our proposed method outperforms the current state-of-the-art methods and improves the average AUC score by more than 2% on the test set.


Subjects
Diagnostic Imaging, Skin Diseases, Skin, Factual Databases, Humans, Machine Learning, Skin/pathology, Skin Diseases/classification, Skin Diseases/diagnosis
12.
Article in English | MEDLINE | ID: mdl-35749327

ABSTRACT

Current one-stage methods for visual grounding encode the language query as one holistic sentence embedding before fusing it with visual features for target localization. Such a formulation provides insufficient ability to model the query at the word level and is therefore prone to neglect words that may not be the most important in the sentence but are critical for the referred object. In this article, we propose Word2Pix: a one-stage visual grounding network based on the encoder-decoder transformer architecture that learns textual-to-visual feature correspondence via word-to-pixel attention. Each word from the query sentence is given an equal opportunity to attend to visual pixels through multiple stacks of transformer decoder layers. In this way, the decoder can learn to model the language query and fuse language with visual features for target prediction simultaneously. We conduct experiments on the RefCOCO, RefCOCO+, and RefCOCOg datasets, and the proposed Word2Pix outperforms existing one-stage methods by a notable margin. The results also show that Word2Pix surpasses two-stage visual grounding models while keeping the merits of the one-stage paradigm, namely end-to-end training and fast inference. Code is available at https://github.com/azurerain7/Word2Pix.
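The word-to-pixel attention at the heart of the decoder can be pictured as standard scaled dot-product attention with words as queries and flattened pixel features as keys and values. The single-head sketch below is a simplification of the multi-layer, multi-head decoder, with illustrative dimensions.

```python
import torch
import torch.nn.functional as F

def word_to_pixel_attention(word_emb, pixel_feat):
    """Single-head scaled dot-product attention in which every query word
    attends over all visual pixels.

    word_emb:   (num_words, d)   -- queries
    pixel_feat: (h * w, d)       -- keys and values (flattened feature map)
    Returns:    (num_words, d)   -- per-word visual context vectors.
    """
    d = word_emb.size(-1)
    scores = word_emb @ pixel_feat.t() / d ** 0.5     # (num_words, h*w)
    attn = F.softmax(scores, dim=-1)                  # word-to-pixel weights
    return attn @ pixel_feat

words = torch.randn(6, 256)          # e.g. a 6-word query, 256-d embeddings
pixels = torch.randn(20 * 20, 256)   # a 20x20 visual feature map
print(word_to_pixel_attention(words, pixels).shape)   # torch.Size([6, 256])
```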

13.
Article in English | MEDLINE | ID: mdl-35560072

ABSTRACT

Edge devices demand low energy consumption, low cost, and a small form factor. To deploy convolutional neural network (CNN) models efficiently on edge devices, energy-aware model compression becomes extremely important. However, existing work has not studied this problem well because it does not consider the diversity of dataflow types in hardware architectures. In this article, we propose EDCompress (EDC), an energy-aware model compression method for various dataflows. It can effectively reduce the energy consumption of various edge devices with different dataflow types. Considering the very nature of model compression procedures, we recast the optimization process as a multistep problem and solve it with reinforcement learning algorithms. We also propose a multidimensional multistep (MDMS) optimization method, which shows higher compression capability than the traditional multistep method. Experiments show that EDC can improve energy efficiency by 20x, 17x, and 26x in the VGG-16, MobileNet, and LeNet-5 networks, respectively, with negligible loss of accuracy. EDC can also indicate the optimal dataflow type for a specific neural network in terms of energy consumption, which can guide the deployment of CNNs on hardware.

14.
Nanomaterials (Basel) ; 12(7)2022 Apr 01.
Article in English | MEDLINE | ID: mdl-35407299

ABSTRACT

Graphene-based nanocomposite films (NCFs) are in high demand due to their superior photoelectric and thermal properties, but their stability and mechanical properties remain a bottleneck. Herein, a facile approach was used to prepare nacre-mimetic NCFs through the non-covalent self-assembly of graphene oxide (GO) and biocompatible proteins. Various techniques were employed to characterize the as-prepared NCFs and to track the interactions between GO and the proteins. The conformational changes of the various proteins induced by GO determined the film-forming ability of the NCFs, and the binding of bovine serum albumin (BSA)/hemoglobin (HB) to GO's surface was beneficial for improving the stability of the as-prepared NCFs. Compared with a GO film without any additive, the indentation hardness and equivalent elastic modulus were improved by 50.0% and 68.6% for the GO-BSA NCF, and by 100% and 87.5% for the GO-HB NCF. Our strategy should be facile and effective for fabricating well-designed bio-nanocomposites for universal functional applications.

15.
IEEE Trans Neural Netw Learn Syst ; 33(2): 798-810, 2022 02.
Article in English | MEDLINE | ID: mdl-33090960

ABSTRACT

Cross-modal retrieval (CMR) enables flexible retrieval across different modalities (e.g., texts versus images), allowing us to benefit maximally from the abundance of multimedia data. Existing deep CMR approaches commonly require a large amount of labeled training data to achieve high performance. However, annotating multimedia data manually is time-consuming and expensive. Thus, how to transfer valuable knowledge from existing annotated data to new data, especially from known categories to new categories, becomes attractive for real-world applications. To this end, we propose a deep multimodal transfer learning (DMTL) approach that transfers knowledge from previously labeled categories (the source domain) to improve retrieval performance on unlabeled new categories (the target domain). Specifically, we employ a joint learning paradigm to transfer knowledge by assigning a pseudolabel to each target sample. During training, the pseudolabel is iteratively updated and passed through our model in a self-supervised manner. At the same time, to reduce the domain discrepancy of different modalities, we construct multiple modality-specific neural networks to learn a shared semantic space for the different modalities by enforcing the compactness of homoinstance samples and the scatter of heteroinstance samples. Our method differs markedly from most existing transfer learning approaches: previous works usually assume that the source domain and the target domain share the same label set, whereas our method considers a more challenging multimodal learning situation where the label sets of the two domains are different or even disjoint. Experimental studies on four widely used benchmarks validate the effectiveness of the proposed method in multimodal transfer learning and demonstrate its superior performance in CMR compared with 11 state-of-the-art methods.

16.
IEEE Trans Cybern ; 52(3): 1736-1749, 2022 Mar.
Article in English | MEDLINE | ID: mdl-32520713

ABSTRACT

Face verification can be regarded as a two-class fine-grained visual-recognition problem, and enhancing the feature's discriminative power is one of the keys to improving its performance. Metric-learning technology is often applied to address this need, and achieving a good tradeoff between underfitting and overfitting plays a vital role in metric learning. Hence, we propose a novel ensemble cascade metric-learning (ECML) mechanism. In particular, hierarchical metric learning is executed in a cascaded way to alleviate underfitting. Meanwhile, at each learning level, the features are split into nonoverlapping groups, and metric learning is executed among the feature groups in an ensemble manner to resist overfitting. Considering the feature distribution characteristics of faces, a robust Mahalanobis metric-learning method (RMML) with a closed-form solution is additionally proposed. It avoids the matrix-inversion failure issue faced by some well-known metric-learning approaches (e.g., KISSME). Embedding RMML into the proposed ECML mechanism, our metric-learning paradigm (EC-RMML) can run in a one-pass learning manner. Experimental results demonstrate that EC-RMML is superior to state-of-the-art metric-learning methods for face verification, and the proposed ECML mechanism is also applicable to other metric-learning approaches.
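For readers unfamiliar with Mahalanobis metric learning, the sketch below shows the distance form such methods learn and how it drives a verification decision. The matrix here is a random positive semi-definite stand-in, the threshold is arbitrary, and RMML's closed-form solution for the metric is not reproduced.

```python
import numpy as np

def mahalanobis_distance(x, y, M):
    """Squared Mahalanobis distance (x - y)^T M (x - y) under a learned
    positive semi-definite matrix M."""
    d = x - y
    return float(d @ M @ d)

def same_person(x, y, M, threshold):
    """Verification decision: accept the pair when the learned distance
    falls below a threshold tuned on validation data."""
    return mahalanobis_distance(x, y, M) < threshold

rng = np.random.default_rng(1)
A = rng.normal(size=(128, 128))
M = A @ A.T / 128                      # random PSD stand-in for the learned metric
x, y = rng.normal(size=128), rng.normal(size=128)
print(mahalanobis_distance(x, y, M), same_person(x, y, M, threshold=250.0))
```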


Subjects
Algorithms, Automated Pattern Recognition, Face, Learning, Machine Learning, Automated Pattern Recognition/methods
17.
IEEE Trans Cybern ; 52(8): 7732-7741, 2022 Aug.
Article in English | MEDLINE | ID: mdl-33566780

ABSTRACT

Image annotation aims to jointly predict multiple tags for an image. Although significant progress has been achieved, existing approaches usually overlook the alignment between specific labels and their corresponding regions due to the weak supervision (i.e., a "bag of labels" for the regions), thus failing to explicitly exploit the discrimination between different classes. In this article, we propose the deep label-specific feature (Deep-LIFT) learning model to build an explicit and exact correspondence between a label and its local visual region, which improves the effectiveness of feature learning and enhances the interpretability of the model itself. Deep-LIFT extracts features for each label by aligning the label and its region; specifically, label-specific features are obtained by learning multiple correlation maps between image convolutional features and label embeddings. Moreover, we construct two variant graph convolutional networks (GCNs) to further capture the interdependency among labels. Empirical studies on benchmark datasets validate that the proposed model achieves superior performance in multilabel classification over existing state-of-the-art methods.
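The correlation maps between convolutional features and label embeddings can be illustrated with a simple dot product per spatial location, as sketched below. Shapes and names are assumptions, and the paper's learnable alignment and GCN components are omitted.

```python
import torch

def label_correlation_maps(feature_map, label_embeddings):
    """Compute one spatial correlation map per label by correlating each
    label embedding with the convolutional feature at every location.

    feature_map:      (c, h, w)  image convolutional features
    label_embeddings: (num_labels, c)
    Returns:          (num_labels, h, w) correlation maps.
    """
    c, h, w = feature_map.shape
    flat = feature_map.reshape(c, h * w)              # (c, h*w)
    maps = label_embeddings @ flat                    # (num_labels, h*w)
    return maps.reshape(-1, h, w)

feats = torch.randn(512, 14, 14)      # backbone features for one image
labels = torch.randn(80, 512)         # embeddings for 80 candidate tags
print(label_correlation_maps(feats, labels).shape)    # torch.Size([80, 14, 14])
```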


Subjects
Algorithms, Data Curation
18.
IEEE J Biomed Health Inform ; 26(3): 1080-1090, 2022 03.
Article in English | MEDLINE | ID: mdl-34314362

ABSTRACT

Pneumonia is one of the most common treatable causes of death, and early diagnosis allows for early intervention. Automated diagnosis of pneumonia can therefore improve outcomes. However, it is challenging to develop high-performance deep learning models due to the lack of well-annotated data for training. This paper proposes a novel method, called Deep Supervised Domain Adaptation (DSDA), to automatically diagnose pneumonia from chest X-ray images. Specifically, we propose to transfer the knowledge from a publicly available large-scale source dataset (ChestX-ray14) to a well-annotated but small-scale target dataset (the TTSH dataset). DSDA aligns the distributions of the source domain and the target domain according to the underlying semantics of the training samples. It includes two task-specific sub-networks for the source domain and the target domain, respectively. These two sub-networks share the feature extraction layers and are trained in an end-to-end manner. Unlike most existing domain adaptation approaches that perform the same tasks in the source domain and the target domain, we attempt to transfer the knowledge from a multi-label classification task in the source domain to a binary classification task in the target domain. To evaluate the effectiveness of our method, we compare it with several existing peer methods. The experimental results show that our method can achieve promising performance for automated pneumonia diagnosis.
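A minimal sketch of the shared-backbone, two-head layout described above (a multi-label head for the source task and a binary head for the target task) is given below. The backbone, layer sizes, and label counts are illustrative placeholders, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class TwoHeadNet(nn.Module):
    """Shared feature extractor with two task-specific heads: a multi-label head
    for the source domain (e.g. 14 findings) and a binary head for the target
    domain (pneumonia vs. normal). Sizes are illustrative placeholders."""
    def __init__(self, feat_dim=512, num_source_labels=14):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Flatten(),                       # stand-in for a CNN backbone
            nn.Linear(224 * 224, feat_dim),
            nn.ReLU(),
        )
        self.source_head = nn.Linear(feat_dim, num_source_labels)  # multi-label logits
        self.target_head = nn.Linear(feat_dim, 1)                  # binary logit

    def forward(self, x, domain):
        feats = self.backbone(x)                # shared feature extraction layers
        return self.source_head(feats) if domain == "source" else self.target_head(feats)

model = TwoHeadNet()
x = torch.randn(2, 1, 224, 224)                 # two grayscale chest X-ray tensors
print(model(x, "source").shape, model(x, "target").shape)
```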


Subjects
Deep Learning, Pneumonia, Early Diagnosis, Humans, Pneumonia/diagnostic imaging, X-Ray Computed Tomography/methods, X-Rays
19.
IEEE Trans Cybern ; 52(12): 12649-12660, 2022 Dec.
Article in English | MEDLINE | ID: mdl-34197333

ABSTRACT

In this article, we propose a simple yet effective approach, called point adversarial self mining (PASM), to improve recognition accuracy in facial expression recognition (FER). Unlike previous works that focus on designing specific architectures or loss functions to solve this problem, PASM boosts the network capability by simulating human learning processes: providing updated learning materials and guidance from more capable teachers. Specifically, to generate new learning materials, PASM leverages a point adversarial attack method and a trained teacher network to locate the most informative position related to the target task, generating harder learning samples to refine the network. The searched position is highly adaptive since it considers both the statistical information of each sample and the teacher network's capability. In addition to receiving new learning materials, the student network also receives guidance from the teacher network. After the student network finishes training, it changes its role and acts as a teacher, generating new learning materials and providing stronger guidance to train a better student network. The adaptive learning-material generation and teacher/student update can be conducted more than once, improving the network capability iteratively. Extensive experimental results validate the efficacy of our method over the existing state of the art for FER.


Subjects
Facial Recognition, Humans, Learning
20.
Curr Med Chem ; 29(4): 700-718, 2022.
Article in English | MEDLINE | ID: mdl-33992055

ABSTRACT

Type I enveloped viruses bind to cell receptors through surface glycoproteins to initiate infection, or undergo receptor-mediated endocytosis and initiate membrane fusion in the acidic environment of endocytic compartments, releasing their genetic material into the cell. During membrane fusion, the envelope protein exposes its fusion peptide, which inserts into the cell membrane or endosomal membrane. Further conformational changes ensue, in which the type I envelope protein forms a typical six-helix bundle structure, shortening the distance between the viral and cell membranes so that fusion can occur. Entry inhibitors targeting viral envelope proteins, or host factors, are effective antiviral agents and have been widely studied. Some have been used clinically, such as T20 and Maraviroc for human immunodeficiency virus 1 (HIV-1) or Myrcludex B for hepatitis D virus (HDV). This review focuses on entry inhibitors that target the six-helix bundle core of highly pathogenic enveloped viruses with class I fusion proteins, including retroviruses, coronaviruses, influenza A viruses, paramyxoviruses, and filoviruses.


Subjects
HIV-1, Virus Internalization, Endocytosis, HIV-1/metabolism, Humans, Membrane Fusion, Viral Envelope Proteins/metabolism, Viral Envelope Proteins/pharmacology